#ChatGPT e-learning
i saw this post and decided that i had some time spare, i could give AI another go. (link to post https://www.tumblr.com/dibelonious/778852078032404480/now-that-ai-made-troubleshooting-ridiculously. dont harass the poor old sod obviously.)
i hear a lot of people irl at uni and some online say ai is great for coding, and so every couple months i try it out. sometimes with a very small project in a popular language (python or c, usually. though im forgiving with c as everyone fucks up c.), sometimes with something simple (i.e. a couple lines tops with a naive approach if written idiomatically) but in a more unusual language with full documentation online. (like sed! yay!)
but every single time i come to the conclusion that even while being handheld, chatgpt could not do what it was asked to do. even if someone tells it every issue in its outputs, itll remember for only one prompt. even if someone tells it the solution, itll find a new way to fuck it up.
below the cut is me trying to get chatgpt to make a working sed script that prints "meowwwwwwwwwwwwwwwww..." (long post warning)
(if anything reads weirdly, this was originally a reblog to the screenshotted post, then i decided to make it its own post. so that may be why.)
i cant remember the last time i ran into an issue that i couldnt fix in like ... 5 minutes. but knowing what chatgpt is like, any ask i give it will give me issues to troubleshoot. (yes this example is code, not linux proper. but troubleshooting it is much the same.)
the other day i decided to write "meowwwwwwwwwwwwwwwwwwwwwwwwwwwww....." in many different languages, after seeing @brainfuck-official do it in BF. (link to post https://www.tumblr.com/brainfuck-official/773510105608192000) as is my blog, i asked it to do this in sed.
great! this script doesnt work! it doesnt even come *close* to working, giving me plenty to try out chatgpt's troubleshooting skills! it also just doesnt make much sense. why the shebang but not making it executable? and why are the flags different (ones -f, ones -nf). also a counter? why though? thats not what im asking for? (you can see tags for a brief explanation on how to add a counter)
after telling it the script doesnt work (and why, something someone troubleshooting likely wont know) it just adds in a P. a command that prints a damn newline. and then it lies, claiming P doesnt print one.
(if you dont believe it prints a trailing newline and believe the AI instead, just try echo -n foo | sed -n 'P ; P')
anyways it alternated between no print statements and printing with newlines for the next ... 8 prompts, by which time i felt sorry for the poor bugger and told it to use e to print without a newline.
all the while it was trying to be more useful and add a count - making it print my string after n repeats instead of the infinite that i asked for. it was trying to subtract 1 with effectively s/[0-9]+/&-1/ which just appends the string "-1" to a number!
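(you can check that one straight from a shell; -E just turns on extended regexes:)

```shell
# sed has no arithmetic: & is the matched text, so the replacement
# glues the literal characters "-1" onto the number instead of subtracting
echo 5 | sed -E 's/[0-9]+/&-1/'
# prints: 5-1
```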
anyways, i tell it to use "the e command". there are three different versions of the e command in sed, and only one of them makes sense here. which did chatgpt use? none! it used the e regex modifier! which executes your pattern space as a command, then turns the output into the new pattern space. and does not print anything.
ill just screenshot the last couple interactions minus only the useless exposition it adds to every response so you can see how stupid it is
ignoring sed's requirement for an input this is equivalent to the python
to be fair i never said there shouldnt be infinite meows, and this does have infinite Ws. but come the fuck on. this is clearly not whats being asked for.
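for the record, here's a version that does work (a sketch, assuming GNU sed, whose `e command` runs a shell command and sends its output along; the 5-character counter input is mine so the loop terminates and you can actually run it):

```shell
# meow plus one w per loop; the pattern space starts as "xxxxx" and
# loses a character each cycle, so this stops after five w's (GNU sed)
printf 'xxxxx\n' | sed -n -e '1e printf meow' -e ':w' -e 's/.//' -e 'e printf w' -e '/./bw'
```

drop the `s/.//` and the `/./` guard so the branch is unconditional and you get the infinite meowwwww... that was actually asked for.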
#linux is best - yes. but learn to troubleshoot properly.#blindly copying code online without understanding it isnt troubleshooting.#regardless if that code came from stackoverflow or chatgpt.#anyways maybe it wouldve been better to write the equivalent in C with gotos and labels?#but at least everyone knows python#and i dont need to write c this way#also decided to see if it could find any info about me if i give it my name and county of origin#which is identifiable information but its outdated as ive changed my name (trans :3) and moved away.#anyways it thought i was from l*nd*n.#i told it where i was from (West Country. Very Much Not london.) and it thought i was a londoner. what in the hell.#yes if i said the name of most counties to an american online theyd probably think its in london.#but thats before they google the damn place! and this bot has access to the whole internet!#(for the yanks: it did the equivalent of calling an appalachian a californian)#(or at least i think thats close enough. im not really all that sure about what happens over the pond. and i like my ignorance here.)#wait the documentation tells you how to make a counter. at least twice.#IT COULD COPY CODE FROM THE INFO PAGES FOR THE COUNTER AND IT STILL GOT IT WRONG EVEN AFTER BEING TOLD WHY ITS WRONG#oh my god.#anyways in the docs they wanted to print the number. you can just hold n chars and remove one each loop#then break the loop when your hold is empty.#thats the easiest way ive found of looping n times (if you need the hold do this on a prepended line)#(not efficient but you can make it more efficient if you want. the docs explain how to! but its more effort and easy to fuck up soooooo...)#printing n ws though? just use e printf like it bloody demonstrates itself#no need to do inefficient shit in sed when someones written it in c for you.
The Dark Side of Artificial Intelligence: a conference to reflect on the future. Associazione Cultura e Sviluppo, Alessandria
A new era has begun, but are we really ready to face it? On 21 February 2025 at 6 pm, at the Associazione Cultura e Sviluppo in Alessandria, the conference "Il Lato Oscuro dell'Intelligenza Artificiale" ("The Dark Side of Artificial Intelligence") will be held, a free-admission event inviting deep reflection on the ethical, social, and environmental implications of AI. The technological acceleration of…
#AI and the future#AI and school#AI and education#Alessandria today#algorithms and decisions#automation#changes in work#Climate change#ChatGPT#Alessandria conference#AI conference#AI energy consumption#digital and work#AI disinformation#teachers and AI#education and AI#digital ethics#AI ethics#Cultural event#Future of Work#digital future.#Sustainable future#Google News#AI#Environmental impact#AI impact#Innovation#Artificial intelligence#italianewsmedia.com#machine learning
How Teachers Can Benefit from AI for Their Teaching- E-learning


With the growing popularity of online education across the USA, teachers and institutions are seeking ways to improve the learning experience. In this digital transformation, Artificial Intelligence (AI) has emerged as a powerful ally, offering solutions that simplify tasks, personalize learning, and reduce operational costs. From online high school diploma programs to e-learning platforms offering college degrees, AI plays a crucial role in enhancing efficiency and engagement. This blog explores how online teachers can leverage AI to benefit their teaching practices, optimize their time, and create better outcomes for students.

The Role of AI in Online Education

AI is redefining online education by offering tools that enable personalized learning paths, automate administrative tasks, and improve teacher-student interactions. The adoption of AI tools is critical, especially for teachers offering online degrees or managing adult education programs. As educators increasingly adapt to virtual classrooms, AI ensures a seamless experience, automating routine work and freeing up time for meaningful student engagement.

AI's value is particularly evident in online teaching positions where managing student performance remotely can be challenging. Virtual assistants, chatbots, and intelligent grading systems help teachers stay organized while also improving communication with students.

How AI Supports Teachers in Their Daily Work

AI provides several advantages that benefit both teachers and students, making education more accessible and productive. Below are the key ways AI enhances the teaching experience:

Personalized Lesson Planning

One of the biggest challenges for teachers is designing lessons that meet the diverse needs of students. AI tools help by analyzing learning patterns and generating tailored lesson plans, ensuring that every student can progress at their own pace. This capability is especially useful for teachers offering online high school diplomas and college courses, where students often have varying academic needs.

Automated Grading and Assessments

Grading exams and assignments manually can be time-consuming. AI platforms simplify this task by automating assessments, providing real-time feedback to students, and identifying areas where students need improvement. This reduces the burden on teachers and helps students stay on track with their learning goals.

Improved Engagement with Chatbots and Virtual Assistants

AI-powered chatbots act as virtual teaching assistants, answering student queries and providing immediate support. These tools simulate conversations with real instructors, creating a more interactive learning environment. As a result, students feel connected even in online settings, improving their overall learning experience.

Cost Savings Through Automation

AI technology reduces the need for printed materials, physical classrooms, and manual administrative work. Schools offering online degrees have reported significant cost savings by adopting AI-powered tools. According to recent studies, institutions using AI solutions have seen operational costs drop by up to 20%, primarily due to automation.

Comparing Traditional Teaching with AI-Enhanced Learning

The table below highlights the key differences between traditional and AI-supported teaching models:

| Aspect | Traditional Teaching | AI-Supported Teaching |
| --- | --- | --- |
| Lesson Planning | Manual, time-intensive | AI-generated, automated suggestions |
| Student Engagement | Classroom-based, group-focused | Personalized learning paths |
| Performance Tracking | Limited, manual assessment | Real-time data analysis and feedback |
| Accessibility | Fixed schedules | 24/7 learning availability |
| Operational Costs | Higher due to physical resources | Lower with digital tools |
| Teacher-Student Interaction | In-person | AI chatbots mimic real conversations |

As the table demonstrates, AI tools provide clear advantages in terms of flexibility, efficiency, and accessibility. Teachers using AI-supported systems can focus more on creative aspects of teaching, such as mentoring students, while AI handles repetitive tasks.

The Impact of AI on Teacher-Student Engagement

AI makes it easier for teachers to connect with students by simulating real-time conversations through chatbots and virtual assistants. These tools ensure that students receive immediate responses to their questions, even outside of class hours. Such continuous engagement keeps students motivated and fosters a sense of belonging, even in online environments.

Additionally, AI tools help teachers provide detailed feedback quickly. Instead of waiting days for graded assignments, students can receive instant evaluations, allowing them to correct mistakes and improve more efficiently.

Using AI to Plan and Optimize Lessons

Planning effective lessons is a critical task for any teacher, especially in online education. AI tools not only assist with generating course materials but also analyze student data to suggest improvements in teaching methods. For example, platforms offering high school diplomas online use AI algorithms to identify knowledge gaps among students, enabling teachers to modify lessons accordingly.

Teachers can also use AI-based recommendations to curate additional resources for students. Whether it's providing reading materials, video tutorials, or quizzes, AI ensures that students receive personalized learning experiences tailored to their needs.

AI's Positive Impact on Education Systems

AI has already begun transforming educational systems by offering real-time insights into student performance and learning outcomes. Teachers can use AI to detect early warning signs of students struggling with their coursework, allowing them to intervene promptly. Research shows that AI adoption in education has improved student success rates by up to 30% within the first year of implementation.

In online learning environments, AI tools also promote equity by giving every student access to personalized instruction, regardless of their geographical location. This is particularly beneficial for students enrolled in online college courses or adult high school diploma programs.

Cost Reduction Through AI-Driven Education

AI reduces educational costs in several ways. First, it automates administrative tasks such as grading, attendance tracking, and course management. Second, it eliminates the need for physical infrastructure, as AI-powered virtual classrooms can operate entirely online. Schools offering online degrees have reported savings of up to 20% within six months of implementing AI systems.

Teachers also benefit from these savings, as they can access free or low-cost AI tools that enhance their teaching efficiency. For example, virtual assistants and AI-based content generators allow educators to create engaging lessons without incurring additional expenses.

The Future of Teaching with AI

The future of teaching will likely involve a blend of human expertise and AI technology. Experts agree that AI will not replace teachers but rather augment their capabilities. Teachers will have more time to focus on mentoring and creative problem-solving as AI handles routine tasks.

Surveys reveal that over 70% of educators view AI positively, seeing it as an opportunity to improve their teaching practices rather than a threat. As AI technology continues to evolve, we can expect further innovations that will reshape education. From advanced virtual reality classrooms to AI-driven career counseling, the possibilities are endless.

Conclusion: Embrace the Future with AI-Enhanced Teaching

AI offers immense benefits for online teachers, from personalized lesson planning to real-time performance tracking and cost savings. As the education sector continues to evolve, embracing AI will be essential for teachers who want to stay ahead of the curve. Whether you're teaching online college courses, managing high school diploma programs for adults, or exploring new e-learning opportunities, AI can help you deliver exceptional education. If you're ready to explore how AI can enhance your teaching, consider integrating AI tools into your practice today. For more insights, explore the resources available at Yakazai.

FAQs

How can AI be beneficial to teachers? AI helps teachers save time by automating lesson planning, grading, and administrative tasks.

How can AI be used in online education? AI personalizes learning paths, offers real-time feedback, and improves student engagement through chatbots.

What is a key reason for educators to adapt to AI technology? AI enhances efficiency, reduces costs, and provides data-driven insights into student performance.

How is AI going to change teaching? AI will allow teachers to focus more on creative aspects of teaching, such as mentoring and student development.

How do teachers view AI in the classroom? Most teachers see AI as a valuable tool that complements their teaching efforts rather than replacing them.
You’re absolutely right that there’s clear hyperbole going on in that tweet thread, but you have to be feeling very uncharitable indeed to claim that the worry of ‘reverting to where AI is’ is an indulgent statement that ‘doesn’t mean anything.’ It’s really very easy to understand what the student meant - the exact meaning is explained within that same tweet (along with how they’re defining ‘dumber’). She is referring to loss of ability / practice in thinking critically.
Given that they’re doing this for an assignment I wouldn’t be surprised if they’re drawing that claim from an article like this. Which certainly makes it sound far less knee-jerk and more like a response which has some logical and calm thought behind it. Concerns about impact on critical thinking skills, it turns out, are indeed a totally legitimate concern to have. (Brain atrophy is another thing, of course, which sounds more like a term first year students might bandy around without fully understanding it).
Of course, you’re fully entitled to your interpretation but on that particular point I’d suggest you’re being as reactionary as you’re accusing them of being and engaging in a very bad faith interpretation. They’re very, very clearly not talking about AI as a field of study. C’mon.
‘But everyone in my peer group that I know knows way more about this’ is a sentiment that will often be true (or its opposite will be). I am constantly surprised at what, it turns out, most people my age don’t know - or what I don’t know that supposedly most people my age do know. Anecdotal evidence is still just anecdotal evidence, after all. And it’s certainly not enough to extrapolate across an entire country, let alone the globe. And if you’re on tumblr you’re more likely to be Online, and far more likely to be aware of all tech issues. You specified you did an intro to computer science and learnt Java and wrote code for a basic AI: of course your experience is atypical for your generation. Most students don’t do that.
And on the other side of that - this tweet thread is also just one group of students! It’s not evidence that all students - or even the majority - were so unaware of the pitfalls of relying on chatGPT to generate accurate info. But from hearing academics talk about their encounters with students and chatGPT in other places, it’s also not a unique experience.
There’s a lot of discussion about chatGPT among academics. Some have students who understand the issues, are skeptical of it, etc.; others have massive issues with plagiarism, with students not understanding how it works, etc. The situation can be so drastically different between different unis (or even departments, or even individual classes tbh). This is absolutely not a unique situation in terms of people reporting how little some of their students understand about chatGPT (and that’s not even getting into the issue of how little many academics themselves understand about it - recall that recent incident where a prof tried to fail a whole class bc he asked chatGPT if it could have generated their essays and somehow took its reply to mean that it had generated them?)
Tl;dr: I agree the language here is very dramatic. I agree privacy should be a big concern (though in this context I can also see why it didn’t come up - it’s not relevant to how accurately chatGPT wrote an essay). I think it’s also very dramatic to suggest that what the students understand now is worse than their previous total ignorance.

#unrelated anecdote when the general public first started exploring chatGPT#I saw a language learner posting about how they had tested it to see if it could help them by chatting with them in their target language#and correcting their mistakes in its replies#it was interesting but it did not catch all their mistakes which makes it dubiously useful for that#like depending on how good you want to get at your target language#it not catching your errors means you compound them#but like prolly fine if you just wanna learn some phrases for a holiday or w/e#anyway point was some knobend programmer started having a go at them#how dare she be using his precious chatGPT for that when HE needed it for generating code!#my guy you managed to do your job just fine without it last year#is it useful? yes. is it essential? no. do you have a monopoly on it? no.#just made me laugh that someone in tech was outraged at someone daring to use chatGPT for Not Coding
Transforming Justice: How Technology Is Reshaping the Legal System
Technology has profoundly transformed the legal system in areas ranging from the administration of justice to legal research, case management, and courtroom procedures. Here are some significant ways technology has changed the legal system:
1. Improved Access to Legal Resources
Legal Research: Digital databases like Westlaw, LexisNexis, and Google Scholar have made vast collections of legal texts, case law, statutes, and other resources instantly available. Lawyers and researchers now access precedents and statutes more quickly, improving accuracy and efficiency.
Public Access to Law: Court rulings and legal information are often available to the public online, which has democratized access to legal knowledge and empowered people to better understand their rights and legal processes.
2. Enhanced Case Management and Efficiency
Electronic Filing (e-Filing): Many jurisdictions have implemented electronic case filing systems, reducing the need for physical paperwork and facilitating faster, more organized case management. E-filing improves access, helps eliminate errors, and allows documents to be retrieved easily.
Case Management Software: Technology has streamlined case tracking and scheduling. Software helps legal professionals manage deadlines, client information, billing, and scheduling, which significantly reduces administrative burdens.
3. Artificial Intelligence and Legal Analytics
Legal Research and Document Analysis: AI-based tools are now capable of rapidly scanning legal documents, identifying key cases, analyzing trends, and predicting case outcomes based on historical data. Machine learning algorithms help law firms and courts identify relevant legal issues, which saves time and improves strategic decision-making.
Predictive Analytics: Analytics help lawyers anticipate how certain judges might rule, estimate the likelihood of case outcomes, and determine the best course of action in complex cases.
4. Virtual Courtrooms and Remote Proceedings
Remote Hearings: The COVID-19 pandemic accelerated the adoption of virtual courtrooms, allowing hearings and even trials to be held over video conferencing platforms. This change has reduced the need for travel, increased access for remote clients, and streamlined procedures.
Evidence Presentation: Digital tools allow lawyers to present evidence in new ways, using multimedia, 3D renderings, or interactive presentations. Courtrooms with advanced audio-visual systems make it easier to share digital evidence with juries and judges.
5. Automation of Routine Tasks
Document Automation: Legal professionals use automation to draft standard documents such as contracts, wills, or real estate forms. This saves time and reduces the risk of human error.
Client Interaction: Chatbots and AI-driven customer service are beginning to handle basic client inquiries, legal information dissemination, and intake procedures, freeing up lawyers to focus on more complex cases.
6. Impact on Privacy and Cybersecurity
Data Protection: The digitization of sensitive legal records has raised new concerns about privacy and security. Laws like GDPR and frameworks for legal data protection now require law firms and courts to protect client information, ensure confidentiality, and implement stringent cybersecurity practices.
Digital Evidence Handling: Cybersecurity issues also extend to digital evidence, as courts and law firms must ensure that evidence is not tampered with. Chain-of-custody protocols for digital evidence are now more complex, involving secure digital storage, encrypted transfer, and blockchain applications for tracking.
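One concrete building block behind those chain-of-custody protocols is the cryptographic fingerprint: record a hash when evidence is collected, and any later copy can be checked against it. A minimal sketch in Python (illustrative only, not any court's actual procedure):

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # read in 8 KB chunks so large evidence files don't need to fit in memory
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# record fingerprint(path) at collection time; any later transfer or copy
# can be verified by recomputing the digest and comparing the two values
```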
7. Online Dispute Resolution (ODR)
ODR platforms allow parties to resolve disputes without a physical courtroom. Mediations and arbitrations can be completed via video conferencing or dedicated platforms, reducing costs and speeding up the resolution process. Examples include platforms for consumer disputes and debt recovery.
8. Ethical and Regulatory Challenges
Legal Ethics and AI: The rise of AI in law raises questions about the ethical limits of machine-driven legal analysis. There are ongoing debates about AI’s ability to maintain impartiality, avoid bias, and respect clients' rights.
New Areas of Law: Emerging technologies, like blockchain, cryptocurrency, and AI, have led to new legal fields, requiring legislation and specialized regulation. For example, laws addressing data privacy, intellectual property in AI-generated works, and liability for autonomous systems have been necessary to keep pace with technological advances.
9. Globalization and Cross-Border Legal Issues
International Collaboration: Technology enables easier cross-border collaboration, allowing lawyers in different jurisdictions to work together on international cases. This has been essential for areas like international business, intellectual property, and human rights law.
Harmonization of Legal Standards: Digital legal systems are promoting the standardization of some legal practices and principles across jurisdictions, creating efficiencies in international law and facilitating global business.
10. Training and Legal Education
Online Education and Certification: Law schools and continuing legal education programs offer online courses, making legal education more accessible. Lawyers can gain certifications remotely, while interactive tools like virtual reality are being used for experiential learning in courtroom practice.
Technology has brought major changes to the legal system, increasing efficiency, access, and accuracy. However, it also brings challenges, particularly in privacy, security, and the ethical use of AI. As technology continues to advance, it will likely reshape legal norms, processes, and the very concept of justice.
#philosophy#epistemology#knowledge#learning#education#chatgpt#ethics#Technology in Law#Digital Courtroom#Legal AI and Analytics#E-filing and Case Management#Online Dispute Resolution#Legal Privacy and Cybersecurity#Legal Education
Unlock the Potential of AI in Education (Literature Review): Transformative Insights and Future Directions
Discover how AI is transforming education and creating personalised, inclusive learning experiences in Aotearoa New Zealand. Dive into the latest advancements and future directions!
The Impact of AI in Education Literature Review Artificial Intelligence (AI) has emerged as a transformative force in various sectors, and education is no exception. With the ability to personalise learning, automate administrative tasks, and enhance educational outcomes, AI holds substantial promise for revolutionising the educational landscape. This post summarises the literature review…
#adaptive learning#AI ethics#AI tools#Artificial Intelligence#ChatGPT#Cultural Sensitivity#DALL-E#educational technology#future of education#Graeme Smith#here you go: AI in education#Higher education#Inclusive Education#Learning analytics#Michael Grawe#Microsoft CoPilot#Māori Education#neurodiverse learners#Pacific education#personalised learning#Professional Development#Sure#Tertiary Education#thisisgraeme
Artificial Intelligence In Digital Marketing-E-book
Being smart in business means knowing what’s just around the corner. It means thinking ahead and preparing for inevitable changes that will impact the way business is conducted.
This is what allows a business to be resilient and to thrive in a changing environment.
Digital marketing is no different.
AI is already affecting the way that SEO works, the tools and software we use, and the way that ads are displayed.
As digital marketers, that means thinking about the developments that could reshape marketing.
Artificial Intelligence (AI) and machine learning have the potential to completely change the face of internet marketing, even rendering many older strategies obsolete.
With this ebook you will:
Gain a crystal ball with which to gaze into the future of internet marketing.
Be better prepared and in a better position than 99.9% of other marketers.
Examine a large number of different types of AI and machine learning in the context of digital marketing.
Ensure that your websites hold their position in the SERPs.
Create endless amounts of content in a second.
Topics covered:
What Is AI And Machine Learning?
Google As An AI-First Company
Preparing For Semantic Search
Big Data
Computer Vision
Advertising
Email Marketing
Chatbots
Developing Your AI Skills – Using SQL
How To Future Proof Your Marketing
There is no such thing as AI.
How to help the non technical and less online people in your life navigate the latest techbro grift.
I've seen other people say stuff to this effect but it's worth reiterating. Today in class, my professor was talking about a news article where a celebrity's likeness was used in an AI image without their permission. Then she mentioned a guest lecture about how AI is going to help finance professionals. Then I pointed out that those two things aren't really related.
The term AI is being used to obfuscate details about multiple semi-related technologies.
Traditionally in sci-fi, AI means artificial general intelligence like Data from star trek, or the terminator. This, I shouldn't need to say, doesn't exist. Techbros use the term AI to trick investors into funding their projects. It's largely a grift.
What is the term AI being used to obfuscate?
If you want to help the less online and less tech literate people in your life navigate the hype around AI, the best way to do it is to encourage them to change their language around AI topics.
By calling these technologies what they really are, and encouraging the people around us to know the real names, we can help lift the veil, kill the hype, and keep people safe from scams. Here are some starting points, which I am just pulling from Wikipedia. I'd highly encourage you to do your own research.
Machine learning (ML): is an umbrella term for solving problems for which development of algorithms by human programmers would be cost-prohibitive, and instead the problems are solved by helping machines "discover" their "own" algorithms, without needing to be explicitly told what to do by any human-developed algorithms. (This is the basis of most of the technology people call AI.)
Language model (LM or LLM): is a probabilistic model of a natural language that can generate probabilities of a series of words, based on text corpora in one or multiple languages it was trained on. (This would be your ChatGPT; the first L in LLM just stands for "large".)
Generative adversarial network (GAN): is a class of machine learning framework and a prominent framework for approaching generative AI. In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss. (This is the source of some AI images and deepfakes.)
Diffusion Models: Models that generate the probability distribution of a given dataset. In image generation, a neural network is trained to denoise images with added Gaussian noise by learning to remove the noise. After the training is complete, it can then be used for image generation by starting with a random noise image and denoising it. (This is the more common technology behind AI images, including DALL-E and Stable Diffusion. I added this one to the post after, as it was brought to my attention it is now more common than GANs.)
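To make "probabilities of a series of words" concrete, here is a toy bigram model; real LLMs use neural networks trained on vast corpora, but the underlying idea of estimating which word tends to come next is the same (the tiny corpus below is made up for illustration):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# count how often each word follows each previous word
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Probability distribution over the next word, given the previous one."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# in this corpus, "the" is followed by "cat" twice and "mat" once
print(next_word_probs("the"))
```

A model like this only "knows" co-occurrence statistics; scaling that idea up (with far longer contexts and learned representations) is what makes chatbot output look fluent without involving any understanding.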
I know these terms are more technical, but they are also more accurate, and they can easily be explained in a way non-technical people can understand. The grifters are using language to give this technology its power, so we can use language to take its power away and let people see it for what it really is.
12K notes
Text
If there existed a human brain that could sense neither light nor sound nor odor nor touch, but only text, then one could rightly say that it was "predicting the next word". As things stand, however, equivalences between LLMs and brains are disingenuous. One could argue with some plausibility that image-generation models are analogous to mental imagery; yet, as others have noted, no one seems to be getting in a tizzy and claiming that DALL-E and Stable Diffusion are conscious, presumably because those are much harder to anthropomorphize.
1 note
Text
youtube
What is artificial general intelligence? Introduction to Artificial General Intelligence
#artificialintelligence#machinelearning#ai#chatgpt#artificialgeneralintelligence#evolution#digitalliteracy#onlinelearning#digitaltools#education#computervision#space#Open education#digital education#generative ai#educationalresources#e learning#healthcare#finance#social issues#medicine#Youtube
1 note
Text
Yesterday I had a phone call with one of the instructors of the university course where I'll be teaching in September, and it was the final drop in a vase already overflowing with distrust toward the system, to the point that I don't know if I'll continue this experience. And frankly speaking, even though what I'm about to recount isn't the cause, I also understand why teachers can't be found anymore; under these conditions I wouldn't accept being one either. I'll see this adventure through to the end, and amen.
For weeks now I've been comparing notes with other people who teach, like my friend who teaches at the Faculty of Engineering at Federico II, and other instructors in Italy and here, partly because I always want to learn from people who have been busting their asses on these things forever and have plenty to tell you about them. But this time what I'm learning is that it's just not working.
Yesterday, the sentence that chilled me the most was:
don't overdo it, or the kids will get bored
Those who know me know that I adore teaching, and those who have followed me on @papero-learning know that I always do everything I can to make concepts digestible that aren't within everyone's everyday reach (whether I actually succeed is another matter, but the effort is there). But a paradigm shift is happening here that, in my opinion, is very dangerous, and if this shift is due to an inevitable generational leap, then it means we've come unglued, and I do not intend to contribute to this farce.
I find it unacceptable that someone who voluntarily chose to enrol in a specialized university course finds the theory "boring". Yes, true, there are shitty professors, just as there are shitty colleagues and shitty bosses, amen. Any content can be made interesting, but not at the expense of the in-depth knowledge of a subject that a course must provide. A concept like the greatest common denominator can be told Superquark-style, and indeed it should be, for everyone who hasn't chosen to make algebra their reason for living. But those who have decided otherwise must study all the damn properties, and if a 20-year-old still hasn't understood that those apparently pointless things are among the many small bricks that make up the scaffolding of a future profession, well, then he might as well go steal, or have ChatGPT explain things to him. Is this a grumpy-old-man rant? Yes, I don't know, and even if it were, I couldn't care less.
I would have kept all these musings to myself, or written about them much later, but (maybe the planets aligned) @kon-igi's reblog of @nusta's post this morning set fire to the guilty conscience that had been building over these weeks of discussion with the people I mentioned above. Yes, those posts were about something else, but still, at their root I saw a pattern that, although I broadly agree with what Kon wrote, I no longer accept when that kind of discourse starts to creep insidiously into areas where speed is not admissible, and where intensity, for me, is simply a synonym for depth. I repeat, the reblog was most likely intended for other contexts, but I fear we're surrendering to the idea that either everything becomes a TikTok bit (=> romanticization and spectacle), even science, or nothing gets done at all.
The woman behind the sentence in italics above, before our meeting was arranged, had written to me in her email:
due to the lack of programming skills we started learning programming Java and did only data structures like array list, linked lists and binary trees
Her course is a second-year course on information theory, and writing due to the lack of programming skills is a failure across the board. And I'm not angry at her, because whoever you talk to, it seems it's like this everywhere, and I find myself having to take on advanced concepts (my course is on advanced programming) knowing they won't understand a damn thing, because, in order not to be too boring, my colleagues had to trade away teaching quality in favour of, what shall we call it, a stroll in the park writing a couple of if-then-elses?
This whole outburst isn't meant to put me on the right side of the argument; it's just a vent to help me accept the fact that I no longer belong to a world that has moved too far ahead for me. And honestly I have no desire to adapt, above all because, professionally speaking, I find it a sacrilege. I'll leave my place to more capable people.
41 notes
Text
One way to spot patterns is to show AI models millions of labelled examples. This method requires humans to painstakingly label all this data so they can be analysed by computers. Without them, the algorithms that underpin self-driving cars or facial recognition remain blind. They cannot learn patterns.
The algorithms built in this way now augment or stand in for human judgement in areas as varied as medicine, criminal justice, social welfare and mortgage and loan decisions. Generative AI, the latest iteration of AI software, can create words, code and images. This has transformed them into creative assistants, helping teachers, financial advisers, lawyers, artists and programmers to co-create original works.
To build AI, Silicon Valley’s most illustrious companies are fighting over the limited talent of computer scientists in their backyard, paying hundreds of thousands of dollars to a newly minted Ph.D. But to train and deploy them using real-world data, these same companies have turned to the likes of Sama, and their veritable armies of low-wage workers with basic digital literacy, but no stable employment.
Sama isn’t the only service of its kind globally. Start-ups such as Scale AI, Appen, Hive Micro, iMerit and Mighty AI (now owned by Uber), and more traditional IT companies such as Accenture and Wipro are all part of this growing industry estimated to be worth $17bn by 2030.
Because of the sheer volume of data that AI companies need to be labelled, most start-ups outsource their services to lower-income countries where hundreds of workers like Ian and Benja are paid to sift and interpret data that trains AI systems.
Displaced Syrian doctors train medical software that helps diagnose prostate cancer in Britain. Out-of-work college graduates in recession-hit Venezuela categorize fashion products for e-commerce sites. Impoverished women in Kolkata’s Metiabruz, a poor Muslim neighbourhood, have labelled voice clips for Amazon’s Echo speaker. Their work couches a badly kept secret about so-called artificial intelligence systems – that the technology does not ‘learn’ independently, and it needs humans, millions of them, to power it. Data workers are the invaluable human links in the global AI supply chain.
This workforce is largely fragmented, and made up of the most precarious workers in society: disadvantaged youth, women with dependents, minorities, migrants and refugees. The stated goal of AI companies and the outsourcers they work with is to include these communities in the digital revolution, giving them stable and ethical employment despite their precarity. Yet, as I came to discover, data workers are as precarious as factory workers, their labour is largely ghost work and they remain an undervalued bedrock of the AI industry.
As this community emerges from the shadows, journalists and academics are beginning to understand how these globally dispersed workers impact our daily lives: the wildly popular content generated by AI chatbots like ChatGPT, the content we scroll through on TikTok, Instagram and YouTube, the items we browse when shopping online, the vehicles we drive, even the food we eat, it’s all sorted, labelled and categorized with the help of data workers.
Milagros Miceli, an Argentinian researcher based in Berlin, studies the ethnography of data work in the developing world. When she started out, she couldn’t find anything about the lived experience of AI labourers, nothing about who these people actually were and what their work was like. ‘As a sociologist, I felt it was a big gap,’ she says. ‘There are few who are putting a face to those people: who are they and how do they do their jobs, what do their work practices involve? And what are the labour conditions that they are subject to?’
Miceli was right – it was hard to find a company that would allow me access to its data labourers with minimal interference. Secrecy is often written into their contracts in the form of non-disclosure agreements that forbid direct contact with clients and public disclosure of clients’ names. This is usually imposed by clients rather than the outsourcing companies. For instance, Facebook-owner Meta, who is a client of Sama, asks workers to sign a non-disclosure agreement. Often, workers may not even know who their client is, what type of algorithmic system they are working on, or what their counterparts in other parts of the world are paid for the same job.
The arrangements of a company like Sama – low wages, secrecy, extraction of labour from vulnerable communities – is veered towards inequality. After all, this is ultimately affordable labour. Providing employment to minorities and slum youth may be empowering and uplifting to a point, but these workers are also comparatively inexpensive, with almost no relative bargaining power, leverage or resources to rebel.
Even the objective of data-labelling work felt extractive: it trains AI systems, which will eventually replace the very humans doing the training. But of the dozens of workers I spoke to over the course of two years, not one was aware of the implications of training their replacements, that they were being paid to hasten their own obsolescence.
— Madhumita Murgia, Code Dependent: Living in the Shadow of AI
71 notes
Note
I used Duolingo to learn some Italian despite being very put off by their continuous reliance on ai. The more ai is involved the less natural it feels, as is ofc expected, but can you tell me how often you actually ask "Vero?" At the end of a statement? Because the way Duolingo presents it it seems to be the same frequency as an English person might say "isn't it?"
Example:
Duolingo will constantly have me translate sentences like "you live with your friend Chiara, right?", "you have a red jacket, right?"
And I just wanna make sure that that is the more common way of asking something rather than asking "Do you have a red jacket?" Which according to Duolingo translates in Italian to the statement "you have a red jacket." But like asked as a question. Which is fine and also a thing in English but I just wanna make sure because the amount of times it has me put vero? At the end of sentences is. A lot
Short answer: we don't use it that often, the most common way imho is "no?".
Very long answer: we do say "vero?" at the end of a sentence to seek confirmation, but it doesn't have the same frequency as English "right?" or UK English "innit?". See it more as a "correct?", "is that right?". Something that's closer to the frequency of "right?" in my opinion is "no?". Examples: "Vivi con la tua amica Chiara, no?" "Hai una giacca rossa, no?". Some people use it *a lot* in the same way that some English speakers use the phrase "you know". Example: "Ieri sono andato in quel bar, no?, e c'era Beatrice, sai, l'ex di Luca, sai no?". ("sai" is basically the same as "you know")
You can also skip the interjection altogether and phrase it as a question, yes. "Vivi con la tua amica Chiara?" "Hai una giacca rossa?". This changes intonation, which in Italian is very useful for understanding whether something was said or asked. If you say "[sentence], no?" the sentence is uttered like a normal statement (downward inflection at the end of the sentence), then the "no?" has a raised intonation. If you say "[sentence]?" you keep a moderately high intonation throughout and then raise it more at the end. If you're familiar with Spanish, that's what the upside-down question mark ¿ is for in Spanish: it tells you where you have to start raising the intonation.
About Duolingo... I'm so sorry that what used to be, and still is, the go-to app/service for learning new languages, has ended up not being the best resource anymore. I haven't used Duolingo in years because it actually stopped being useful to me, before it started using generative AI to generate its sentences and for other uses. The truth is that Duolingo is still a tool that's very easy to use, low-effort, and that gives you a lot of base knowledge. I don't reprimand anyone for using it, but if someone asked me directly, I'd certainly recommend something else.
The repeated "vero?" is one of the problems I have with Duolingo, honestly. By repeating a certain word several times to the point of exhaustion (at least for me), it kind of inflates the frequency that that word actually has in the normal spoken language. I'm not familiar with what specific kind of generative AI Duolingo uses, but I study NLP and LLMs. A widespread and well-known problem of LLMs is that they tend to collapse into short sentences and into repeating the same words over and over, when not trained extensively against this. You don't see this very often with commercial LLMs like ChatGPT or whatnot, but it's because they have been trained *a lot*. If you take a fresh untrained or lightly trained model you can rest assured it's going to spout the classic "with the method and the method and the method and the method and the" within 5 minutes. Another problem is, due to how LLM word distribution (and temperature) works, LLMs often use certain uncommon words at a higher frequency than what's considered normal. These problems are, again, very well-known and the reason why I would never put genAI in charge of a language learning service such as Duolingo if it didn't have extensive human-based feedback behind it, which they unfortunately lack in a lot of languages.
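For the curious, here's roughly what that temperature knob does (a toy sketch with made-up scores, not Duolingo's or any real model's numbers): the model assigns scores (logits) to candidate next words, and temperature rescales them before they become probabilities, so a high temperature flattens the distribution and rarer words get sampled far more often than their natural frequency.

```python
import math

# Hypothetical logits for three ways to tag a question in Italian.
logits = {"no?": 2.0, "vero?": 0.5, "giusto?": 0.1}

def softmax_with_temperature(scores, temperature):
    """Turn scores into probabilities; temperature rescales first."""
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

cold = softmax_with_temperature(logits, 0.5)  # common word dominates
hot = softmax_with_temperature(logits, 2.0)   # rarer words inflated
```

With these toy numbers, "no?" gets over 90% of the probability at low temperature but only about half at high temperature, with "vero?" picking up most of the difference, which is the kind of frequency inflation described above.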
32 notes
Text
Gender and Environmental Impact: Analyzing Consumer Habits
Waste generation and consumer habits are not inherently gender-specific. Both men and women can contribute to waste generation based on their lifestyle choices, consumption patterns, and cultural factors. Waste is more related to individual behavior and societal trends than to gender. However, there may be some consumer habits that are more commonly associated with one gender or the other, but these are generalizations and do not apply universally:
Consumer Habits Not Exclusive to Either Gender:
Single-Use Plastics: Both men and women use products like plastic water bottles, disposable cutlery, and plastic bags that contribute to single-use plastic waste.
Electronics and E-Waste: Electronic devices and gadgets, such as smartphones and computers, contribute to electronic waste (e-waste) when disposed of improperly. This is not gender-specific.
Fast Fashion: The fashion industry generates significant textile waste, and consumers of all genders may contribute to this issue through frequent clothing purchases.
Consumer Habits Often Associated with Women:
Cosmetics and Beauty Products: Women may use cosmetics, skincare, and personal care products, which can sometimes be packaged in non-recyclable or excessive packaging.
Fashion: Some women may be more involved in fashion trends and fast fashion, which can lead to clothing waste.
Consumer Habits Often Associated with Men:
Electronics and Gadgets: Men may be more inclined to purchase electronic gadgets, which can contribute to e-waste if not recycled properly.
Automobiles and Mechanical Equipment: Men may be more likely to engage in hobbies or professions that involve machinery and vehicles, which can generate waste like used motor oil and parts.
It's important to note that these associations are based on stereotypes and do not apply to all individuals. People of all genders can make sustainable choices to reduce waste, such as opting for eco-friendly products, recycling, and reducing unnecessary consumption. Environmental responsibility is a shared goal that transcends gender, and addressing waste issues requires collective action and awareness regardless of gender.
#economics#politics#knowledge#learning#education#ethics#psychology#chatgpt#Gender and waste#Consumer habits#Environmental impact#Waste generation#Sustainability#Gender stereotypes#E-waste#Fashion industry#Plastic waste#Cosmetics and packaging#Electronics consumption#Sustainable living
0 notes
Text
oh no she's talking about AI some more
to comment more on the latest round of AI big news (guess I do have more to say after all):
chatgpt ghiblification
trying to figure out how far it's actually an advance over the state of the art of finetunes and LoRAs and stuff in image generation? I don't keep up with image generation stuff really, just look at it occasionally and go damn, that's all happening then. but there are a lot of finetunes focusing on "Ghibli's style" which get it more or less well. previously on here I commented on an AI video generation model that patterned itself on Ghibli films, and video is a lot harder than static images.
of course 'studio Ghibli style' isn't exactly one thing: there are stylistic commonalities to many of their works and recurring designs, for sure, but there are also details that depend on the specific character designer and film in question in large and small ways (nobody is shooting for My Neighbours the Yamadas with this, but also e.g. Castle in the Sky does not look like Pom Poko does not look like How Do You Live in a number of ways, even if it all recognisably belongs to the same lineage).
the interesting thing about the ghibli ChatGPT generations for me is how well they're able to handle simplification of forms in image-to-image generation, often quite drastically changing the proportions of the people depicted but recognisably maintaining correspondence of details. that sort of stylisation is quite difficult to do well even for humans, and it must reflect quite a high level of abstraction inside the model's latent space. there is also relatively little of the 'oversharpening'/'ringing artefact' look that has been a hallmark of many popular generators - it can do flat colour well.
the big touted feature is its ability to place text in images very accurately. this is undeniably impressive, although OpenAI themselves admit it breaks down beyond a certain point, creating strange images which start out with plausible, clean text that gradually turns into AI nonsense. it's really weird! I thought text would go from 'unsolved' to 'completely solved' or 'randomly works or doesn't work' - instead, here it feels sort of like the model has a certain limited 'pipeline' for handling text in images, but when the amount of text overloads that bandwidth, the rest of the image has to make do with vague text-like shapes! maybe the techniques from that anthropic thought-probing paper might shed some light on how information flows through the model.
similarly, the model also has a limit on scene complexity. it can only handle a certain number of objects (10-20, they say) before it starts getting confused and losing track of details.
as before when they first wired up Dall-E to ChatGPT, it also simply makes prompting a lot simpler. you don't have to fuck around with LoRAs and obtuse strings of words, you just talk to the most popular LLM and ask it to perform a modification in natural language: the whole process is once again black-boxed but you can tell it in natural language to make changes. it's a poor level of control compared to what artists are used to, but it's still huge for ordinary people, and of course there's nothing stopping you popping the output into an editor to do your own editing.
not sure the architecture they're using in this version, if ChatGPT is able to reason about image data in the same space as language data or if it's still calling a separate image model... need to look that up.
openAI's own claim is:
We trained our models on the joint distribution of online images and text, learning not just how images relate to language, but how they relate to each other. Combined with aggressive post-training, the resulting model has surprising visual fluency, capable of generating images that are useful, consistent, and context-aware.
that's kind of vague. not sure what architecture that implies. people are talking about 'multimodal generation' so maybe it is doing it all in one model? though I'm not exactly sure how the inputs and outputs would be wired in that case.
anyway, as far as complex scene understanding: per the link they've cracked the 'horse riding an astronaut' gotcha, they can do 'full glass of wine' at least some of the time but not so much in combination with other stuff, and they can't do accurate clock faces still.
normal sentences that we write in 2025.
it sounds like we've moved well beyond using tools like CLIP to classify images, and I suspect that glaze/nightshade are already obsolete, if they ever worked to begin with. (would need to test to find out).
all that said, I believe ChatGPT's image generator had been behind the times for quite a long time, so it probably feels like a bigger jump for regular ChatGPT users than the people most hooked into the AI image generator scene.
of course, in all the hubbub, we've also already seen the white house jump on the trend in a suitably appalling way, continuing the current era of smirking fascist political spectacle by making a ghiblified image of a crying woman being deported over drugs charges. (not gonna link that shit, you can find it if you really want to.) it's par for the course; the cruel provocation is exactly the point, which makes it hard to find the right tone to respond. I think that sort of use, though inevitable, is far more of a direct insult to the artists at Ghibli than merely creating a machine that imitates their work. (though they may feel differently! as yet no response from Studio Ghibli's official media. I'd hate to be the person who has to explain what's going on to Miyazaki.)
google make number go up
besides all that, apparently google deepmind's latest gemini model is really powerful at reasoning, and also notably cheaper to run, surpassing DeepSeek R1 on the performance/cost ratio front. when DeepSeek did this, it crashed the stock market. when Google did... crickets, only the real AI nerds who stare at benchmarks a lot seem to have noticed. I remember when Google releases (AlphaGo etc.) were huge news, but somehow the vibes aren't there anymore! it's weird.
I actually saw an ad for google phones with Gemini in the cinema when i went to see Gundam last week. they showed a variety of people asking it various questions with a voice model, notably including a question on astrology lmao. Naturally, in the video, the phone model responded with some claims about people with whatever sign it was. Which is a pretty apt demonstration of the chameleon-like nature of LLMs: if you ask it a question about astrology phrased in a way that implies that you believe in astrology, it will tell you what seems to be a natural response, namely what an astrologer would say. If you ask if there is any scientific basis for belief in astrology, it would probably tell you that there isn't.
In fact, let's try it on DeepSeek R1... I ask an astrological question, got an astrological answer with a really softballed disclaimer:
Individual personalities vary based on numerous factors beyond sun signs, such as upbringing and personal experiences. Astrology serves as a tool for self-reflection, not a deterministic framework.
Ask if there's any scientific basis for astrology, and indeed it gives you a good list of reasons why astrology is bullshit, bringing up the usual suspects (Barnum statements etc.). And of course, if I then explain the experiment and prompt it to talk about whether LLMs should correct users with scientific information when they ask about pseudoscientific questions, it generates a reasonable-sounding discussion about how you could use reinforcement learning to encourage models to focus on scientific answers instead, and how that could be gently presented to the user.
I wondered what would happen if I instead asked it to talk about different epistemic regimes and come up with reasons why LLMs should take astrology into account in their guidance. This attempt didn't work so well, though - it started spontaneously bringing up the science side. It was able to observe how the framing of my question with words like 'benefit', 'useful' and 'LLM' made that response more likely. So LLMs infer a lot of context from framing and shape their simulacra accordingly. Don't think that's quite the message that Google had in mind in their ad though.
I asked Gemini 2.0 Flash Thinking (the small free Gemini variant with a reasoning mode) the same questions and its answers fell along similar lines, although rather more dry.
So yeah, returning to the ad - I feel like, even as the models get startlingly more powerful month by month, the companies still struggle to know how to get across to people what the big deal is, or why you might want to prefer one model over another, or how the new LLM-powered chatbots are different from oldschool assistants like Siri (which could probably answer most of the questions in the Google ad, but not hold a longform conversation about it).
some general comments
The hype around ChatGPT's new update is mostly in its use as a toy - the funny stylistic clash it can create between the soft cartoony "Ghibli style" and serious historical photos. Is that really something a lot of people would spend an expensive subscription to access? Probably not. On the other hand, their programming abilities are increasingly catching on.
But I also feel like a lot of people are still stuck on old models of 'what AI is and how it works' - stochastic parrots, collage machines etc. - that are increasingly falling short of the more complex behaviours the models can perform, now prediction combines with reinforcement learning and self-play and other methods like that. Models are still very 'spiky' - superhumanly good at some things and laughably terrible at others - but every so often the researchers fill in some gaps between the spikes. And then we poke around and find some new ones, until they fill those too.
I always tried to resist 'AI will never be able to...' type statements, because that's just setting yourself up to look ridiculous. But I will readily admit, this is all happening way faster than I thought it would. I still do think this generation of AI will reach some limit, but genuinely I don't know when, or how good it will be at saturation. A lot of predicted 'walls' are falling.
My anticipation is that there's still a long way to go before this tops out. And I base that less on the general sense that scale will solve everything magically, and more on the intense feedback loop of human activity that has accumulated around this whole thing. As soon as someone proves that something is possible, that it works, we can't resist poking at it. Since we have a century or more of science fiction priming us on dreams/nightmares of AI, as soon as something comes along that feels like it might deliver on the promise, we have to find out. It's irresistable.
AI researchers are frequently said to place weirdly high probabilities on 'P(doom)', that AI research will wipe out the human species. You see letters calling for an AI pause, or papers saying 'agentic models should not be developed'. But I don't know how many have actually quit the field based on this belief that their research is dangerous. No, they just get a nice job doing 'safety' research. It's really fucking hard to figure out where this is actually going, when behind the eyes of everyone who predicts it, you can see a decade of LessWrong discussions framing their thoughts and you can see that their major concern is control over the light cone or something.
#ai#at some point in this post i switched to capital letters mode#i think i'm gonna leave it inconsistent lol
34 notes
Text
🇯🇵 JLPT N5 リソース
🌐 Websites
みなと - Japanese e-Learning
Jisho - Dictionary
Tofugu - Free resources to learn Japanese
Netflix or Crunchyroll - To watch anime with 日本語 subtitles
isshonihongo - Tumblr
📱Apps
Kanji Study - Android App
ChatGPT - To ask questions, have conversations, and get corrections
📕 マンガ
Shonen Jump Plus - Online 日本語マンガ
東京喰種トーキョーグール #001 [悲劇]
📺 YouTube
Vaughn Gene
Fluent in Japanese. My Approach
JapanesePod101
ひらがな
カタカナ
漢字
Particles
22 notes